Conversation
## #️⃣ Related issue

> #55

## 📝 Work done

> Fixed the sort logic for experience retrieval
> Applied Qdrant OrderBy

### Screenshot (optional)

## 💬 Review requests (optional)

> Note anything you'd especially like the reviewers to look at
>
> e.g., I'd like a better name for method XXX. Any suggestions?

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->

## Summary by CodeRabbit

## Release Notes

* **Performance**
  * Added a created_at index, improving retrieval performance and stability
  * Applied server-side sorting to the experience list, reducing initial data-sorting cost
* **Improvements**
  * Applied timezone-aware UTC timestamps for more accurate date handling
  * Improved the paging logic, making pagination more reliable

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
## #️⃣ Related issue

> #25

## 📝 Work done

> Split the classification model from the writing model to improve quality, and shortened validation to reduce latency

### Screenshot (optional)

## 💬 Review requests (optional)

> Note anything you'd especially like the reviewers to look at
>
> e.g., I'd like a better name for method XXX. Any suggestions?

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->

## Summary by CodeRabbit

* **New Features**
  * Added support for the Anthropic Claude language model.
  * Added Anthropic API configuration options.
* **Improvements**
  * The cover-letter draft generation process is now simpler and more efficient.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->
Walkthrough

This PR integrates Anthropic Claude as a new LLM provider and simplifies generation by removing the complex storyline-based pipeline in favor of direct draft generation from experience data. It also adds random nickname generation for Apple OAuth and improves Qdrant indexing and timezone handling.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant LLMProvider
    participant WritingLLM as Claude<br/>(writing_llm)
    participant StreamingLLM as OpenAI<br/>(streaming_llm)
    participant llm_service

    Client->>llm_service: generate_draft_text(experiences, ...)
    Note over llm_service: Format experiences<br/>into prompt
    llm_service->>LLMProvider: writing_llm
    LLMProvider->>WritingLLM: Initialize ChatAnthropic
    WritingLLM-->>LLMProvider: Claude Instance
    LLMProvider-->>llm_service: writing_llm instance
    llm_service->>WritingLLM: invoke(formatted_prompt)
    WritingLLM-->>llm_service: Draft text
    llm_service->>StreamingLLM: stream(refined_prompt)
    StreamingLLM-->>Client: Streamed chunks
```

```mermaid
sequenceDiagram
    participant OAuth as Apple OAuth
    participant AuthService
    participant Utils as auth/utils
    participant Database

    OAuth->>AuthService: apple_oauth_flow(full_name=null)
    alt full_name provided
        AuthService->>AuthService: Use full_name
    else full_name missing
        AuthService->>Utils: generate_random_nickname()
        Utils-->>AuthService: Random nickname<br/>(e.g., "Happy Tiger")
    end
    AuthService->>Database: _find_or_create_user(email, nickname)
    alt User exists by email
        Database-->>AuthService: Reactivate if inactive
    else User not found
        Database->>Database: Create new user
    end
    Database-->>AuthService: User instance
    AuthService-->>OAuth: Authenticated user
```
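The nickname fallback shown in the second diagram could look roughly like the sketch below. The word lists and "Adjective Animal" format here are invented for illustration; the actual `generate_random_nickname()` in `auth/utils` is not shown in this review.

```python
import random

# Hypothetical word pools; the real implementation may use different lists.
ADJECTIVES = ["Happy", "Brave", "Calm", "Swift"]
ANIMALS = ["Tiger", "Otter", "Falcon", "Panda"]

def generate_random_nickname() -> str:
    """Return a random 'Adjective Animal' nickname, e.g. 'Happy Tiger'."""
    return f"{random.choice(ADJECTIVES)} {random.choice(ANIMALS)}"
```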
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 inconclusive)

✅ Passed checks (2 passed)
Actionable comments posted: 4
🧹 Nitpick comments (5)
src/experience/service.py (1)
647-650: Consider adding a guard against over-fetching driven by large offsets.

The current structure pre-fetches `offset + limit` rows and then slices in Python, so a large `offset` can sharply increase memory use and response time. Capping the page window is safer.

Suggested fix

```diff
+MAX_PAGE_WINDOW = 2000
+if (limit + offset) > MAX_PAGE_WINDOW:
+    raise HTTPException(
+        status_code=status.HTTP_400_BAD_REQUEST,
+        detail=f"Pagination window too large. limit+offset must be <= {MAX_PAGE_WINDOW}.",
+    )
+fetch_limit = min(limit + offset, total)
```

Also applies to: 671-673
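As a self-contained sketch of the suggested cap, with `ValueError` standing in for FastAPI's `HTTPException` so the snippet runs without the framework:

```python
# Sketch of the capped page-window idea from the suggestion above.
MAX_PAGE_WINDOW = 2000

def compute_fetch_limit(limit: int, offset: int, total: int) -> int:
    """Reject oversized pagination windows, then size the pre-fetch."""
    if (limit + offset) > MAX_PAGE_WINDOW:
        # In the real service this would be an HTTP 400 response.
        raise ValueError(
            f"Pagination window too large. limit+offset must be <= {MAX_PAGE_WINDOW}."
        )
    return min(limit + offset, total)
```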
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/experience/service.py` around lines 647 - 650, Introduce a configurable page-window cap and apply it to the fetch sizing logic: define a PAGE_WINDOW_MAX (or PAGE_WINDOW_LIMIT) constant and change both usages of fetch_limit (the current calculation using fetch_limit = min(limit + offset, total)) to include this cap — e.g., fetch_limit = min(limit + offset, total, PAGE_WINDOW_MAX). Also add an early guard that handles excessively large offset values (either clamp offset = min(offset, PAGE_WINDOW_MAX - limit) or return a validation error) so callers cannot force a huge in-memory slice; update both occurrences (the fetch_limit assignment around fetch_limit = min(limit + offset, total) and the similar block at lines ~671-673) accordingly.

src/chats/llm_service.py (3)
337-338: Exceptions can be lost if the `asyncio.create_task` reference is not stored.

If the task reference is not kept, an exception raised inside the task may be silently ignored, leaving only a "Task exception was never retrieved" warning. In the current structure exceptions are delivered through the queue, so this is not a major problem, but explicitly tracking the task is safer.

♻️ Suggestion: store the task reference and handle exceptions

```diff
-    asyncio.create_task(_run())
+    task = asyncio.create_task(_run())
+    task.add_done_callback(lambda t: t.exception() if t.done() and not t.cancelled() else None)
```

Or, more simply, name the task to ease debugging:

```diff
-    asyncio.create_task(_run())
+    asyncio.create_task(_run(), name=f"ai_response_stream_{question_id}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chats/llm_service.py` around lines 337 - 338, The background pipeline task created with asyncio.create_task(_run()) should be stored and its exceptions handled; change the call to save the Task (e.g., self._pipeline_task = asyncio.create_task(self._run(), name="llm_pipeline") or append to a tasks list) and attach a done-callback that checks task.exception() and logs/handles errors (or await it in shutdown). Locate the _run() invocation and replace the fire-and-forget call with storing the Task reference and adding a task.add_done_callback(lambda t: log and re-raise or log t.exception()) so exceptions aren't lost and debugging is easier.
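A minimal, runnable illustration of the fix the comment describes: storing the task reference and observing its exception in a done-callback. The task name mirrors the suggestion; the rest is demo scaffolding, not the project's actual pipeline code.

```python
import asyncio

failures: list[str] = []

def _log_task_exception(task: asyncio.Task) -> None:
    # Record (rather than lose) the task's exception once it finishes.
    if not task.cancelled() and task.exception() is not None:
        failures.append(f"{task.get_name()}: {task.exception()!r}")

async def _run() -> None:
    # Stand-in for the real background pipeline; fails immediately.
    raise RuntimeError("boom")

async def main() -> None:
    # Store the reference instead of a fire-and-forget create_task(_run()).
    task = asyncio.create_task(_run(), name="ai_response_stream_demo")
    task.add_done_callback(_log_task_exception)
    # Wait for the task; return_exceptions=True keeps main() alive.
    await asyncio.gather(task, return_exceptions=True)
    # One extra loop turn guarantees the done-callback has run.
    await asyncio.sleep(0)

asyncio.run(main())
```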
148-154: Silently swallowing exceptions can make debugging difficult.

In `get_experiences_by_ids`, an exception is skipped with `continue` and nothing is logged. Continuing even when some experiences cannot be fetched is reasonable, but at minimum a warning should be logged.

🔧 Suggestion: log the exception

```diff
 for exp_id in experience_ids:
     try:
         exp = get_experience(qdrant_client, exp_id, user_id)
         experiences.append(exp)
-    except Exception:
+    except Exception as e:
+        logger.warning(f"Failed to fetch experience {exp_id}: {e}")
         continue
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chats/llm_service.py` around lines 148 - 154, In get_experiences_by_ids, don’t silently swallow exceptions in the loop over experience_ids; instead catch the exception from get_experience(qdrant_client, exp_id, user_id) into a variable and emit a warning log that includes the exp_id and the exception details (and stack trace if your logger supports it) before continuing to the next id so missing experiences are visible in logs; update the try/except block around get_experience to call the existing logger (or create one named logger) and include context like exp_id and user_id in the log message while still appending successful exp results to experiences.
99-104: `mock_scores` is hardcoded, which is technical debt.

`mock_scores = [1.0, 0.7, 0.4]` is a fixed mock value, not a real relevance score. As written, an arbitrary score is assigned based on the order of the experiences and rendered into the prompt. Consider computing a real relevance score, explicitly labeling the value as a mock, or removing the score display.

♻️ Option 1: remove the score display

```diff
-    mock_scores = [1.0, 0.7, 0.4]
     parts = []
     for i, exp in enumerate(experiences):
         is_primary = (i == 0)
-        score = mock_scores[min(i, len(mock_scores) - 1)]
-        parts.append(_format_experience_with_role(exp, is_primary, score))
+        parts.append(_format_experience_with_role(exp, is_primary))
```

Then remove the `score` parameter from `_format_experience_with_role`:

```diff
-def _format_experience_with_role(exp: Experience, is_primary: bool, score: float) -> str:
+def _format_experience_with_role(exp: Experience, is_primary: bool) -> str:
     role_tag = "[주 경험 - 상세 서술]" if is_primary else "[보조 경험 - 간략 언급]"
     lines = [
-        f"### {exp.title} {role_tag} (적합도: {score:.2f})",
+        f"### {exp.title} {role_tag}",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chats/llm_service.py` around lines 99 - 104, The code currently assigns hardcoded mock_scores and passes them into _format_experience_with_role, which embeds fake suitability scores into prompts; remove this technical debt by eliminating score handling: delete mock_scores and the score argument when building parts in the loop (where experiences and parts are used), then update _format_experience_with_role to remove the score parameter and any score rendering logic, and adjust all other callers of _format_experience_with_role accordingly; alternatively, if you prefer to surface scores, replace mock_scores with a real scoring function (e.g., compute_experience_score(experience)) and call that instead, ensuring the signature changes are applied consistently.src/chats/dependencies.py (1)
54-65: Using `OPENAI_TEMPERATURE` for the Anthropic LLM may cause confusion.

`settings.OPENAI_TEMPERATURE` is being passed to the Claude model. It works, but the setting name is misleading. Consider renaming it to something generic (e.g. `LLM_TEMPERATURE` or `DEFAULT_TEMPERATURE`) or adding an Anthropic-specific setting.

♻️ Suggestion: use a generic temperature setting

In src/config.py:

```diff
-OPENAI_TEMPERATURE: float = 0.7
+LLM_TEMPERATURE: float = 0.7  # or split into OPENAI_TEMPERATURE and ANTHROPIC_TEMPERATURE
```

And in this file:

```diff
 self._writing_llm = ChatAnthropic(
     model=settings.ANTHROPIC_MODEL,
     api_key=settings.ANTHROPIC_API_KEY,
-    temperature=settings.OPENAI_TEMPERATURE,
+    temperature=settings.LLM_TEMPERATURE,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chats/dependencies.py` around lines 54 - 65, The property writing_llm currently uses settings.OPENAI_TEMPERATURE which is misleading for an Anthropic model; update configuration to expose a clear temperature setting (either a generic LLM_TEMPERATURE/DEFAULT_TEMPERATURE or an Anthropic-specific ANTHROPIC_TEMPERATURE) in src/config.py and replace settings.OPENAI_TEMPERATURE with the new setting in the writing_llm property (and any other places that reference OPENAI_TEMPERATURE) so the code reads ChatAnthropic(..., temperature=settings.LLM_TEMPERATURE) or temperature=settings.ANTHROPIC_TEMPERATURE; ensure the new config key is documented/defaulted and update imports/typing if needed.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
* docker-compose.local.yml
* pyproject.toml
* src/auth/service.py
* src/auth/utils.py
* src/chats/dependencies.py
* src/chats/llm_service.py
* src/chats/prompts.py
* src/config.py
* src/database.py
* src/experience/models.py
* src/experience/service.py
```python
if email_user:
    if not email_user.is_active:
        raise OAuthError("탈퇴한 계정입니다. 고객센터에 문의해주세요.")
    email_user.is_active = True
    session.add(email_user)
    await session.commit()
    await session.refresh(email_user)
    return email_user, False
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify account deactivation/deletion semantics and whether is_active implies soft-deleted.
set -euo pipefail
echo "== User model fields related to active/deleted state =="
rg -n -C3 '\bis_active\b|\bdeleted_at\b' src/users src/auth
echo
echo "== User/account deletion or deactivation service paths =="
rg -n -C3 'delete|deactivate|withdraw|soft.?delete|is_active\s*=\s*False|deleted_at\s*=' src/users src/auth
echo
echo "== OAuth login paths that reactivate users =="
rg -n -C3 '_find_or_create_user|is_active\s*=\s*True' src/auth
```

Repository: Nexters/Logit-BE
Length of output: 10406
Automatically restoring deleted accounts during OAuth login violates the user-withdrawal policy.

Lines 131-137 automatically restore an inactive (withdrawn) account matched by email, without the user's consent. `delete_user()` implements soft-delete by setting `is_active=False`, but during OAuth login `_find_or_create_user()` automatically flips this back to `is_active=True`. This nullifies the deletion policy, since it restores access to all of the withdrawn account's data without re-authorization by the user. Preserving `oauth_provider_id` is meant to prevent free-trial re-issuance (fraud prevention), not to authorize automatic account restoration. Require an explicit re-signup or reactivation flow instead.
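A sketch of the non-reactivating behavior the review asks for. `User` and `AccountDeletedError` are hypothetical stand-ins for the project's actual types; the real service works against a database session rather than plain objects.

```python
from dataclasses import dataclass
from typing import Optional

class AccountDeletedError(Exception):
    """Raised when a soft-deleted account attempts OAuth login."""

@dataclass
class User:
    email: str
    is_active: bool = True

def resolve_oauth_user(email_user: Optional[User]) -> Optional[User]:
    """Return the matched active user, or None so the caller creates one."""
    if email_user is None:
        return None
    if not email_user.is_active:
        # Do NOT flip is_active here; require an explicit reactivation flow.
        raise AccountDeletedError(email_user.email)
    return email_user
```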
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/auth/service.py` around lines 131 - 137, The code in
_find_or_create_user() currently reactivates soft-deleted accounts by setting
email_user.is_active = True during OAuth login; change this so that if an
email-matched user exists but is_active is False you do not flip is_active or
return the account: instead treat it as a blocked/deleted account and return an
explicit error or signal requiring an explicit reactivation/re-signup flow;
preserve the oauth_provider_id semantics but do not auto-reactivate, and update
callers of _find_or_create_user() to handle the new “deleted account” response
(e.g., raise a specific exception or return (None, 'deleted_account')) so
reactivation must go through delete_user()/reactivation endpoint or explicit
user consent flow.
```python
except Exception as e:
    # Re-raise any non-Qdrant errors
    raise
```
Please clean up the unnecessary exception binding/re-raise block.

`except Exception as e: raise` on line 89 neither uses `e` nor adds any behavior, which triggers a lint warning (F841).

Suggested fix
```diff
-    except Exception as e:
-        # Re-raise any non-Qdrant errors
-        raise
+    except Exception:
+        # Re-raise any non-Qdrant errors
+        raise
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-except Exception as e:
-    # Re-raise any non-Qdrant errors
-    raise
+except Exception:
+    # Re-raise any non-Qdrant errors
+    raise
```
🧰 Tools
🪛 Ruff (0.15.2)
[error] 89-89: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/database.py` around lines 89 - 91, The except block in src/database.py
currently binds the exception to a variable (`except Exception as e: raise`) but
does nothing with it, causing a lint F841; fix by removing the unnecessary
binding or removing the redundant catch entirely—either change `except Exception
as e:` to `except Exception:` if you intend to re-raise, or delete the
try/except that only re-raises; look for the try/except surrounding the code in
this file and update the `except` that references `e`.
| "서비스기획", "PM/PO", "사업개발", "전략기획", "시장분석", "지표설정(KPI/OKR)", "밴치마킹", "수익모델설계", | ||
| "콘텐츠제작", "퍼포먼스마케팅", "SNS운영", "광고집행", "검색최적화(SEO)", "CRM", "B2B/B2C영업", "제안서작성", | ||
| "UX/UI", "브랜딩", "그래픽디자인", "프로토타이핑", "디자인시스템", "영상편집", "모션그래픽", "3D모델링", "사용자테스트", | ||
| "서비스기획", "PM/PO", "사업개발", "전략기획", "시장분석", "지표설정", "밴치마킹", "수익모델설계", |
The tag name typo (밴치마킹) needs to be fixed.

밴치마킹 is normally written 벤치마킹. A typo in the tag dictionary affects AI extraction/search quality and the quality of user-facing copy.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/experience/models.py` at line 53, Replace the misspelled tag string
"밴치마킹" with the correct "벤치마킹" in the tags list where it is defined (the
tag/dictionary/choices variable in src/experience/models.py that contains
entries like "서비스기획", "PM/PO", ...); update the exact string in that variable
(e.g., TAG_CHOICES / TAGS / the model field default list) and ensure any other
occurrences in the same module are changed so the tag name is consistent
throughout the codebase.
```python
List of relevant tags (2 tags)
"""
```
The "2 tags" requirement and the actual validation logic are inconsistent.

The prompt/comments require 2 tags, but the code actually allows 1-5 and truncates to a maximum of 5. This requirement violation can destabilize the ranking signal.

Suggested fix
```diff
-4. 예시: "백엔드, DB설계, 트러블슈팅, API연동"
+4. 예시: "백엔드, DB설계"
@@
-    # Ensure 1-5 tags
-    if not valid_tags:
-        # Fallback: return empty list
-        return []
-    elif len(valid_tags) > 5:
-        valid_tags = valid_tags[:5]
-
-    return valid_tags
+    # Ensure exactly 2 tags
+    if len(valid_tags) < 2:
+        return []
+    return valid_tags[:2]
```

Also applies to: 263-265, 289-295
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/experience/service.py` around lines 249 - 250, The docstring/prompt says
“2 tags” but the code currently allows 1–5 and truncates to 5; update the tag
validation/normalization logic so it enforces exactly 2 tags to match the
requirement: locate the code that validates/truncates the tags (the variable
tags and functions that sanitize/validate tags—e.g., any validate_tags,
sanitize_tags, or where tags are sliced like tags[:5]) and change it to reject
inputs with length != 2 (or return an explicit error) and remove the 5-item
truncation; ensure any user-facing prompt/docstring and tests are updated to
reflect the strict 2-tag requirement.